CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

Authors

Abstract

Poisoning attacks are emerging threats to deep neural networks, in which adversaries attempt to compromise models by injecting malicious data points into the clean training data. Poisoning attacks target either the availability or the integrity of a model. The availability attack aims to degrade the overall accuracy, while the integrity attack causes misclassification only for specific instances without affecting the accuracy on clean data. Although clean-label integrity attacks have proven effective in recent studies, the feasibility of clean-label availability attacks remains unclear. This paper, for the first time, proposes a clean-label approach, CLPA, for the poisoning availability attack. We reveal that, due to the intrinsic imperfection of classifiers, naturally misclassified inputs can be considered a special type of poisoned data, which we refer to as "natural poisoned data''. We then propose a two-phase generative adversarial net (GAN) based poisoned data generation framework, along with a triplet loss function, for synthesizing poisoned samples that locate in a similar distribution to the natural poisoned data. The generated poisoned data are plausible to human perception and can also bypass the singular vector decomposition (SVD) based defense. We demonstrate the effectiveness of our approach on the CIFAR-10 and ImageNet datasets over a variety of models. Code is available at: https://github.com/bxz9200/CLPA.
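The abstract only names the triplet loss, so the following is a minimal sketch of how such an objective could look, assuming a PyTorch setup; the anchor/positive/negative role assignment and the margin value are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def triplet_loss(anchor_feat, positive_feat, negative_feat, margin=1.0):
    # anchor_feat:   features of a GAN-synthesized poisoned sample
    # positive_feat: features of a naturally misclassified ("natural poisoned") sample
    # negative_feat: features of a correctly classified clean sample
    d_pos = F.pairwise_distance(anchor_feat, positive_feat)  # pull toward natural poisoned data
    d_neg = F.pairwise_distance(anchor_feat, negative_feat)  # push away from clean data
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

Minimizing such a loss encourages the generated samples to sit in a distribution similar to that of the natural poisoned data, as the abstract describes.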


Related resources

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This f...
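The adversarial game described here is conventionally written as the minimax objective (the standard GAN formulation, stated for reference rather than quoted from the snippet):

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]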


Conditional Generative Adversarial Nets

Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustr...
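A minimal sketch of the conditioning mechanism described above, assuming PyTorch; the layer sizes and embedding choice are illustrative assumptions. The label y is embedded and concatenated with the generator's noise input (a discriminator would receive the same label information alongside its real or generated sample).

import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(num_classes, num_classes)  # label embedding
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, y):
        # condition generation by concatenating the label embedding with the noise vector
        return self.net(torch.cat([z, self.embed(y)], dim=1))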


Temporal Generative Adversarial Nets

In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a ...


Triple Generative Adversarial Nets

Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation...


Tensorizing Generative Adversarial Nets

Generative Adversarial Network (GAN) and its variants exhibit state-of-the-art performance in the class of generative models. To capture higher-dimensional distributions, the common learning procedure requires high computational complexity and a large number of parameters. The problem of employing such massive framework arises when deploying it on a platform with limited computational power suc...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i8.20902